Supplementary materials - NeuMiss networks: differentiable programming for supervised learning with missing values. Appendix A: Proofs
Proof of Lemma 2. Identifying the second- and first-order terms in X, we get the stated identity; the last equality concludes the proof. Additionally, assume that either Assumption 2 or Assumption 3 holds; this concludes the proof by Lemma 1. We next establish an auxiliary result controlling the convergence of the Neumann iterates to the matrix inverse. Note that Proposition A.1 can easily be extended to the general case by working with M (61), i.e., an M nonlinearity applied to the activations.
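The auxiliary result mentioned above concerns the Neumann-series approximation of a matrix inverse, the core computational idea behind NeuMiss networks. A minimal sketch of these iterates, assuming a matrix S whose residual I - S has spectral radius below 1 (the function name `neumann_inverse` is illustrative, not from the paper):

```python
import numpy as np

def neumann_inverse(S, n_iter):
    """Approximate S^{-1} with truncated Neumann iterates.

    Uses the fixed-point recursion A_{k+1} = I + (I - S) A_k, whose
    limit A satisfies S A = I, provided the spectral radius of
    (I - S) is strictly less than 1 so the series converges.
    """
    d = S.shape[0]
    identity = np.eye(d)
    approx = identity.copy()  # 0-th iterate: A_0 = I
    for _ in range(n_iter):
        approx = identity + (identity - S) @ approx  # Neumann update
    return approx
```

Each extra iteration adds one more term of the series S^{-1} = Σ_k (I - S)^k, so the approximation error shrinks geometrically in the spectral radius of I - S.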
When Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach
Driven by applications in telecommunication networks, we explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient due to the exploding variance of the path-dependent likelihood ratios. To mitigate this, we introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance. In addition, we design a machine learning algorithm to estimate this marginal likelihood ratio using importance sampling data. Numerical experiments indicate that our algorithm outperforms the classic importance sampling methods.
- North America > United States (0.04)
- Asia > China > Hong Kong (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
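The core mechanics of importance sampling for rare events can be illustrated on a toy one-dimensional problem (not the paper's tandem-queue setting): estimate P(X > c) for X ~ Exp(1) by sampling from a heavier-tailed Exp(lam) proposal and reweighting each draw by the likelihood ratio f(x)/g(x). The function name `rare_event_is` and the parameter choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_is(c, lam=0.2, n=100_000):
    """Estimate p = P(X > c) for X ~ Exp(1) by importance sampling.

    Proposal: Exp(lam) with lam < 1, which places far more mass in the
    rare region {x > c} than the target does. Each proposal draw is
    reweighted by the likelihood ratio
        f(x)/g(x) = exp(-x) / (lam * exp(-lam * x))
                  = exp(-(1 - lam) * x) / lam.
    """
    x = rng.exponential(scale=1.0 / lam, size=n)  # draws from Exp(lam)
    lr = np.exp(-(1.0 - lam) * x) / lam           # likelihood ratio f/g
    return np.mean((x > c) * lr)
```

Here the true value is exp(-c), so the estimator's accuracy is easy to check; a naive Monte Carlo estimate with the same n would see almost no samples in the rare region. In the tandem-queue setting the likelihood ratio is path-dependent, which is precisely what inflates its variance and motivates the paper's marginal (stationary-distribution) ratio instead.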
Extending Models Via Gradient Boosting: An Application to Mendelian Models
Theodore Huang, Gregory Idos, Christine Hong, Stephen Gruber, Giovanni Parmigiani, Danielle Braun
Improving existing widely adopted prediction models is often a more efficient and robust path to progress than training new models from scratch. Existing models may (a) incorporate complex mechanistic knowledge, (b) leverage proprietary information, and (c) have surmounted barriers to adoption. Compared to model training, model improvement and modification receive little attention. In this paper we propose a general approach to model improvement: we combine gradient boosting with any previously developed model to improve model performance while retaining important existing characteristics. To exemplify, we consider the context of Mendelian models, which estimate the probability of carrying genetic mutations that confer susceptibility to disease by using family pedigrees and health histories of family members. Via simulations we show that integrating gradient boosting with an existing Mendelian model can produce an improved model that outperforms both that model and the model built using gradient boosting alone. We illustrate the approach on genetic testing data from the USC-Stanford Cancer Genetics Hereditary Cancer Panel (HCP) study.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.89)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.46)
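One concrete way to boost from an existing model, in the spirit of the abstract above, is scikit-learn's `init` parameter of `GradientBoostingClassifier`: the trees then fit the residuals of the existing model's predictions rather than starting from a constant. A minimal sketch with a synthetic dataset and a logistic regression standing in for the existing model (the paper's setting would use a Mendelian carrier-probability model instead):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in the paper this would be pedigree features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Plays the role of the previously developed model (must expose
# fit / predict_proba to be usable as a boosting initializer).
existing = LogisticRegression(max_iter=1000)

# Boosting initialized from the existing model: each tree corrects
# the residual errors of the initializer's predictions.
boosted = GradientBoostingClassifier(init=existing, n_estimators=100,
                                     random_state=0)
boosted.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, boosted.predict_proba(X_te)[:, 1])
```

If the existing model is already well calibrated, the boosted trees only need to learn its systematic errors, which is why the combined model can outperform either component alone.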